Inspecting and Editing Knowledge Representations in Language Models
Hernandez, Evan | Li, Belinda Z. | Andreas, Jacob
Neural language models (LMs) represent facts about the world described by text. Sometimes these facts derive from training data (in most LMs, a representation of the word "banana" encodes the fact that bananas are fruits). Sometimes facts derive from input text itself (a representation of the sentence "I poured out the bottle" encodes the fact that the bottle became empty). We describe REMEDI, a method for learning to map statements in natural language to fact encodings in an LM's internal representation system. REMEDI encodings can be used as knowledge editors: when added to LM hidden representations, they modify downstream generation to be consistent with new facts. REMEDI encodings may also be used as probes: when compared to LM representations, they reveal which properties LMs already attribute to mentioned entities, in some cases making it possible to predict when LMs will generate outputs that conflict with background knowledge or input text. REMEDI thus links work on probing, prompting, and LM editing, and offers steps toward general tools for fine-grained inspection and control of knowledge in LMs.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- Asia > Japan (0.04)
- (23 more...)
- Law (1.00)
- Leisure & Entertainment > Sports (0.68)
- Education > Educational Setting > Higher Education (0.67)
- (2 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science > Problem Solving (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.94)
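The abstract's two uses of a fact encoding (editing by adding it to hidden states, probing by comparing it against them) can be illustrated with a toy numpy sketch. This is not the REMEDI implementation: the editor matrix, dimensions, and function names here are invented for illustration, and the real editor is learned rather than random.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy hidden size

# Hypothetical learned editor matrix: maps an encoding of a natural-language
# statement into an edit direction in the LM's hidden-representation space.
# REMEDI learns this map; here it is random for illustration.
W_edit = 0.1 * rng.normal(size=(D, D))

def remedi_edit(hidden, fact_encoding):
    """Knowledge editing: add the mapped fact to a hidden state so that
    downstream generation is steered to be consistent with the new fact."""
    return hidden + W_edit @ fact_encoding

def remedi_probe(hidden, fact_encoding):
    """Probing: score how strongly a hidden state already encodes the fact,
    via a dot product with the mapped fact direction."""
    return float(hidden @ (W_edit @ fact_encoding))

hidden = rng.normal(size=D)  # hidden state at an entity mention (toy)
fact = rng.normal(size=D)    # encoding of a natural-language fact (toy)
edited = remedi_edit(hidden, fact)

# Adding the fact direction necessarily raises the probe score for that fact,
# since the score increases by the squared norm of the direction.
print(remedi_probe(edited, fact) > remedi_probe(hidden, fact))  # prints True
```

The same vector thus serves both roles the abstract describes: added, it edits; compared, it probes.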
Mention and Entity Description Co-Attention for Entity Disambiguation
Nie, Feng (Sun-Yat-Sen University, Guangzhou) | Cao, Yunbo (Tencent Corporation, Beijing) | Wang, Jinpeng (Microsoft Research Asia) | Lin, Chin-Yew (Microsoft Research Asia) | Pan, Rong (Sun-Yat-Sen University, Guangzhou)
For the task of entity disambiguation, mention contexts and entity descriptions both contain many kinds of information, but only a subset of it is helpful for disambiguation. In this paper, we propose a type-aware co-attention model for entity disambiguation, which simultaneously identifies the most discriminative words in mention contexts and the most relevant sentences in the corresponding entity descriptions. To bridge the semantic gap between mention contexts and entity descriptions, we further incorporate entity type information to enhance the co-attention mechanism. Our evaluation shows that the proposed model outperforms state-of-the-art methods on three public datasets. Further analysis confirms that both the co-attention mechanism and the type-aware mechanism are effective.
- Europe > United Kingdom > England (0.05)
- North America > United States > Colorado (0.05)
- North America > United States > Utah (0.04)
- (4 more...)
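The co-attention idea the abstract describes (each side attending over the other, with a type signal as a bridge) can be sketched in a few lines of numpy. This is a toy illustration with invented shapes and a crude additive type bias, not the paper's model, which uses learned encoders and a trained type-aware mechanism.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 8  # toy embedding size

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy inputs; in the paper these would come from learned neural encoders.
context_words = rng.normal(size=(5, D))  # mention-context word vectors
desc_sents = rng.normal(size=(3, D))     # entity-description sentence vectors
type_vec = rng.normal(size=D)            # entity-type embedding

# Word-sentence affinity matrix, biased by the type embedding as a crude
# stand-in for the type-aware enhancement.
affinity = context_words @ desc_sents.T + (context_words @ type_vec)[:, None]

# Co-attention: both sides attend over each other simultaneously.
word_to_sent = softmax(affinity, axis=1)  # per word: weights over sentences
sent_to_word = softmax(affinity, axis=0)  # per sentence: weights over words

# Attended summaries, pooled into single vectors for scoring a candidate.
context_summary = (sent_to_word.T @ context_words).mean(axis=0)  # (D,)
desc_summary = (word_to_sent @ desc_sents).mean(axis=0)          # (D,)
score = float(context_summary @ desc_summary)  # candidate-entity score
```

In use, one such score would be computed per candidate entity and the highest-scoring candidate selected; the attention weights indicate which context words and description sentences drove the decision.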